Fix policy_based_routing.sh script on simple-nva module #1226
Conversation
…ltiple LB associated
Hi @simonebruzzechesse.
As we discussed earlier f2f, we still need to go through a few modifications before merging:
- run the PBR config as the last step, so the other steps can be executed immediately
- use an infinite loop in the background to periodically check whether the user added LBs later. For each of them, check whether an HC route has already been configured; if not, configure it. Then sleep for X seconds (see the sketch after this list)
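A minimal sketch of what that background loop could look like, assuming the interface number is passed as the script's first argument and reading the LB forwarding-rule IPs from the local routing table populated by the guest agent. Table names, the `proto 66` filter and the sleep interval are illustrative assumptions, not the module's actual values:

```bash
#!/bin/bash
# Illustrative sketch only; names and table numbers are assumptions.
IF_NUMBER="${1}"
IF_NAME="eth${IF_NUMBER}"
IF_GW="$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/${IF_NUMBER}/gateway")"

# dedicated routing table for health-check traffic, one per interface
grep -qF "hc-${IF_NAME}" /etc/iproute2/rt_tables \
  || echo "$((100 + IF_NUMBER)) hc-${IF_NAME}" >> /etc/iproute2/rt_tables

# route health-check responses back out through this interface's gateway
ip route add default via "${IF_GW}" dev "${IF_NAME}" table "hc-${IF_NAME}" 2>/dev/null || true

while true; do
  # LB forwarding-rule IPs show up as local routes installed by the guest
  # agent (assumed here to be tagged with proto 66)
  for LB_IP in $(ip route show table local dev "${IF_NAME}" proto 66 | awk '{print $2}'); do
    # configure the policy-based route only if it is not present yet
    if ! ip rule list | grep -q "from ${LB_IP} lookup hc-${IF_NAME}"; then
      ip rule add from "${LB_IP}/32" table "hc-${IF_NAME}"
    fi
  done
  sleep 2
done
```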
modules/cloud-config-container/simple-nva/files/policy_based_routing.sh
Reworked the script, since the previous version did not take into account load balancers configured after instance creation. In the new version the policy_based_routing.sh script runs in the background, which allows the routing table to be configured properly via the start_routing.sh script. The policy_based_routing.sh script checks every two seconds for new load balancer routes installed by the guest agent; for each LB available on that network interface it checks whether the policy-based route is already configured and, if not, configures it. Overall this should solve the previous issue with network interfaces having no LB associated (stuck in the while loop without configuring VM routes), and it also handles more than one LB associated to the network interface (I believe the previous version of the script would have failed in that circumstance).
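For illustration, launching the script in the background from start_routing.sh could look something like the snippet below. The path and the assumption that the script takes the interface number as its only argument are hypothetical:

```bash
# Hypothetical snippet: start the policy-based routing loop in the background
# for each NIC, so the rest of start_routing.sh proceeds without waiting for
# load balancers to appear.
for nic in $(ls /sys/class/net | grep -E '^eth[0-9]+$' | sed 's/eth//'); do
  /var/run/nva/policy_based_routing.sh "${nic}" &
done
```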
Fixed the policy_based_routing.sh script in the simple-nva module when dealing with network interfaces that are either not associated to an internal TCP Load Balancer (infinite loop in the while cycle) or have multiple LBs pointing to the NIC (the ip route command likely failing on the multi-line response returned by ip r show table local).
As per the documentation referenced in the script, a curl request to http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/$IF_NUMBER/forwarded-ips/ returns "" when no load balancer is associated to that network interface, or a newline-separated list of indexes (0 1 ..), one for each associated load balancer. A further request to http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/$IF_NUMBER/forwarded-ips/[0,1,..] returns the specific LB IP address.
We could evaluate an improvement that uses those curl requests directly instead of waiting for the guest agent to install the route in the local routing table.
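A rough sketch of that metadata-based alternative, using only the endpoints described above (index list first, then one request per index); variable names, the hc table name and the sleep interval are illustrative:

```bash
#!/bin/bash
# Sketch of the curl-based alternative: ask the metadata server for the
# forwarded IPs instead of waiting for the guest agent to install local routes.
IF_NUMBER="${1}"
MD_BASE="http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/${IF_NUMBER}/forwarded-ips"

while true; do
  # returns "" with no LB, or a newline-separated list of indexes (0 1 ..)
  for idx in $(curl -s -H "Metadata-Flavor: Google" "${MD_BASE}/"); do
    LB_IP="$(curl -s -H "Metadata-Flavor: Google" "${MD_BASE}/${idx}")"
    # add the policy-based route only if it is not configured yet
    ip rule list | grep -q "from ${LB_IP}" \
      || ip rule add from "${LB_IP}/32" table "hc-eth${IF_NUMBER}"
  done
  sleep 2
done
```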